Multivariable Mendelian randomization: the use of pleiotropic genetic variants to estimate causal effects.
A conventional Mendelian randomization analysis assesses the causal effect of a risk factor on an outcome by using genetic variants that are solely associated with the risk factor of interest as instrumental variables. However, in some cases, such as the case of triglyceride level as a risk factor for cardiovascular disease, it may be difficult to find a relevant genetic variant that is not also associated with related risk factors, such as other lipid fractions. Such a variant is known as pleiotropic. In this paper, we propose an extension of Mendelian randomization that uses multiple genetic variants associated with several measured risk factors to simultaneously estimate the causal effect of each of the risk factors on the outcome. This "multivariable Mendelian randomization" approach is similar to the simultaneous assessment of several treatments in a factorial randomized trial. Methods for estimating the causal effects are presented and compared using real and simulated data, and the assumptions necessary for a valid multivariable Mendelian randomization analysis are discussed. Subject to these assumptions, we demonstrate that triglyceride-related pathways have a causal effect on the risk of coronary heart disease independent of the effects of low-density lipoprotein cholesterol and high-density lipoprotein cholesterol.

Dr. Stephen Burgess is supported by a fellowship from the Wellcome Trust (100114). This is the final version. It was first published by OUP at http://aje.oxfordjournals.org/content/181/4/251.long
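The summary-data version of this idea is straightforward to sketch. Below is a minimal illustration, assuming per-variant association estimates with each risk factor (beta_X, one column per exposure) and with the outcome (beta_Y, with standard errors se_Y); the inverse-variance weighted regression shown is one common way to implement such an estimator, and all names and numbers are illustrative rather than taken from the paper.

```python
# Minimal sketch of multivariable Mendelian randomization with summary data.
# Assumes per-variant associations with each risk factor (beta_X) and with
# the outcome (beta_Y, standard errors se_Y). Names are illustrative.
import numpy as np

def multivariable_mr(beta_X, beta_Y, se_Y):
    """Inverse-variance weighted regression of outcome associations on
    exposure associations, with no intercept: one causal estimate per
    risk factor."""
    W = np.diag(1.0 / se_Y**2)                 # weight each variant by precision
    XtW = beta_X.T @ W
    theta = np.linalg.solve(XtW @ beta_X, XtW @ beta_Y)
    # Approximate standard errors from the weighted least-squares covariance
    cov = np.linalg.inv(XtW @ beta_X)
    return theta, np.sqrt(np.diag(cov))

# Toy example: 50 variants, 3 correlated lipid-related exposures
rng = np.random.default_rng(0)
beta_X = rng.normal(size=(50, 3))
true_effects = np.array([0.3, 0.0, -0.2])      # e.g. TG, HDL-C, LDL-C
se_Y = np.full(50, 0.05)
beta_Y = beta_X @ true_effects + rng.normal(scale=se_Y)
theta, se = multivariable_mr(beta_X, beta_Y, se_Y)
print(theta)  # estimates close to the simulated causal effects
```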
Evolving Recursive Programs using Non-recursive Scaffolding
Genetic programming has proven capable of evolving solutions to a wide variety of problems. However, the successes have largely been with programs without iteration or recursion; evolving recursive programs has turned out to be particularly challenging. The main obstacle seems to be that recursive programs are particularly fragile under search operators: a small change to a correct recursive program generally produces a completely wrong one. In this paper, we present a simple and general method for passing back and forth between a recursive program and an associated non-recursive program. Finding a recursive program can therefore be reduced to evolving non-recursive programs and then converting the optimal non-recursive program found into its associated recursive program. This avoids the fragility problem above, as evolution never searches the space of recursive programs. We present promising experimental results on a test-bed of recursive problems.
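As a rough illustration of the scaffolding idea as described here, the sketch below evaluates a candidate body against an oracle built from known input/output pairs, so the body never calls itself during search, and then "ties the knot" to recover a genuinely recursive program. The factorial example and all names are illustrative, not the paper's test-bed.

```python
# Hedged sketch: during evolution, "recursive" calls are served by an oracle
# built from training data; once a body is correct with the oracle, the
# oracle is replaced by the function itself. Details are illustrative.

def make_oracle(examples):
    """Oracle answering would-be recursive calls from training data."""
    table = dict(examples)
    return lambda n: table[n]

# A candidate non-recursive body: takes the scaffold (oracle or self) as an
# explicit argument instead of calling itself.
def factorial_body(rec, n):
    return 1 if n == 0 else n * rec(n - 1)

examples = {0: 1, 1: 1, 2: 2, 3: 6, 4: 24}

# Fitness evaluation during search uses the oracle, so a small mutation
# cannot cascade through nested self-calls:
oracle = make_oracle(examples)
assert all(factorial_body(oracle, n) == out for n, out in examples.items())

# Conversion: replace the oracle by the program itself ("tying the knot").
def tie_knot(body):
    def f(n):
        return body(f, n)
    return f

factorial = tie_knot(factorial_body)
print(factorial(10))  # 3628800, beyond the training examples
```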
A review of instrumental variable estimators for Mendelian randomization.
Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.

Stephen Burgess is supported by the Wellcome Trust (grant number 100114). Dylan Small was supported by a grant from the US National Science Foundation Measurement, Methodology and Statistics program. Simon G. Thompson is supported by the British Heart Foundation (grant number CH/12/2/29428). This is the final version of the article. It was first available from SAGE via http://dx.doi.org/10.1177/096228021559757
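The simplest of the estimators listed, the ratio (Wald) method, divides the variant-outcome association by the variant-exposure association. A minimal sketch, with a delta-method standard error and illustrative numbers:

```python
# Sketch of the ratio (Wald) instrumental variable estimate for one variant.
# Standard error uses a delta-method approximation for a ratio; as the
# review notes, this degrades with weak instruments (small beta_zx relative
# to se_zx). All inputs below are illustrative.
import numpy as np

def ratio_estimate(beta_zx, se_zx, beta_zy, se_zy):
    """Causal estimate = variant-outcome association / variant-exposure association."""
    est = beta_zy / beta_zx
    se = abs(est) * np.sqrt((se_zy / beta_zy)**2 + (se_zx / beta_zx)**2)
    return est, se

est, se = ratio_estimate(beta_zx=0.08, se_zx=0.01, beta_zy=0.024, se_zy=0.006)
print(f"causal estimate {est:.2f} (SE {se:.2f})")
```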
Modeling the costs and long-term health benefits of screening the general population for risks of cardiovascular disease: a review of methods used in the literature.
BACKGROUND: Strategies for screening and intervening to reduce the risk of cardiovascular disease (CVD) in primary care settings need to be assessed in terms of both their costs and long-term health effects. We undertook a literature review to investigate the methodologies used. METHODS: In the framework of developing a new health-economic model for evaluating different screening strategies for primary prevention of CVD in Europe (EPIC-CVD project), we identified seven key modeling issues and reviewed papers published between 2000 and 2013 to assess how they were addressed. RESULTS: We found 13 relevant health-economic modeling studies of screening to prevent CVD in primary care. The models varied in their degree of complexity, with between two and 33 health states. Programmes that screen the whole population by a fixed cut-off (e.g., predicted 10-year CVD risk >20%) identify predominantly elderly people, who may not be those most likely to benefit from long-term treatment. Uncertainty and model validation were generally poorly addressed. Few studies considered the disutility of taking drugs in otherwise healthy individuals or the budget impact of the programme. CONCLUSIONS: Model validation, incorporation of parameter uncertainty, and sensitivity analyses for assumptions made are all important components of model building and reporting, and deserve more attention. Complex models may not necessarily give more accurate predictions. Availability of a large enough source dataset to reliably estimate all relevant input parameters is crucial for achieving credible results. Decision criteria should consider budget impact and the medicalization of the population as well as cost-effectiveness thresholds.

This work was financially supported by the EPIC-CVD project. EPIC-CVD is a European Commission-funded project under the Health theme of the Seventh Framework Programme that builds on EPIC-Heart, which has been funded by the Medical Research Council, the British Heart Foundation and a European Research Council Advanced Investigator Award. This is the author accepted manuscript. It is currently embargoed pending publication.
The use of repeated blood pressure measures for cardiovascular risk prediction: a comparison of statistical models in the ARIC study.
Many prediction models have been developed for the risk assessment and the prevention of cardiovascular disease in primary care. Recent efforts have focused on improving the accuracy of these prediction models by adding novel biomarkers to a common set of baseline risk predictors. Few have considered incorporating repeated measures of the common risk predictors. Through application to the Atherosclerosis Risk in Communities study and simulations, we compare models that use simple summary measures of the repeat information on systolic blood pressure, such as (i) baseline only; (ii) last observation carried forward; and (iii) cumulative mean, against more complex methods that model the repeat information using (iv) ordinary regression calibration; (v) risk-set regression calibration; and (vi) joint longitudinal and survival models. In comparison with the baseline-only model, we observed modest improvements in discrimination and calibration using the cumulative mean of systolic blood pressure, but little further improvement from any of the complex methods. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

J.K.B. was supported by the Medical Research Council grant numbers G0902100 and MR/K014811/1. This work was funded by the UK Medical Research Council (G0800270), British Heart Foundation (SP/09/002), UK National Institute for Health Research Cambridge Biomedical Research Centre, European Research Council (268834) and European Commission Framework Programme 7 (HEALTH-F2-2012-279233). The ARIC study is carried out as a collaborative study supported by the National Heart, Lung, and Blood Institute contracts (HHSN268201100005C, HHSN268201100006C, HHSN268201100007C, HHSN268201100008C, HHSN268201100009C, HHSN268201100010C, HHSN268201100011C and HHSN268201100012C). This is the final version of the article. It first appeared from Wiley via https://doi.org/10.1002/sim.714
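For the simple summary measures, here is a hedged sketch of (ii) and (iii) on an illustrative long-format table; the column names and data are assumptions for illustration, not the ARIC data layout.

```python
# Sketch of two simple summaries of repeated systolic blood pressure:
# (ii) last observation carried forward and (iii) cumulative mean of all
# measurements observed so far. Data and column names are illustrative.
import numpy as np
import pandas as pd

visits = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2],
    "visit": [1, 2, 3, 1, 2],
    "sbp":   [132.0, np.nan, 141.0, 120.0, 118.0],  # one missed measurement
}).sort_values(["id", "visit"])

# (ii) carry the most recent measurement forward over missed visits
visits["sbp_locf"] = visits.groupby("id")["sbp"].ffill()
# (iii) running mean of the measurements available up to each visit
visits["sbp_cummean"] = (visits.groupby("id")["sbp"]
                               .expanding().mean()
                               .reset_index(level=0, drop=True))
print(visits)
```

Either summary can then replace the baseline value as a predictor in the survival model used for risk assessment.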
Southward expansion: The myth of the West in the promotion of Florida, 1876–1900
This article examines the ways in which promoters and developers of Florida, in the decades after Reconstruction, engaged with a popular myth of the West as a means of recasting and selling their state to prospective settlers in the North and Midwest. The myth envisaged a cherished region to the west where worthy Americans could migrate and achieve social and economic independence away from the crowded confines of the East, or Europe. According to state immigration agents, land promoters and other booster writers, Florida, although a Southern ex-Confederate state, offered precisely these 'western' opportunities for those hard-working Northerners seeking land and an opening for agrarian prosperity. However, the myth, which posited that, in the West, an individual's labour and thrift were rewarded with social and economic improvement, meshed awkwardly with the contemporary emergence of Florida as a popular winter destination for wealthy tourists and invalids seeking leisure and healthfulness away from the North. Yet it also reflected and reinforced promotional notions of racial improvement that would occur with an influx of enterprising Anglo-Americans, who would effectively displace the state's large African American population. In Florida, the myth of the West supported the linked post-Reconstruction processes of state development and racial subjugation.
Bias modelling in evidence synthesis
Policy decisions often require synthesis of evidence from multiple sources, and the source studies typically vary in rigour and in relevance to the target question. We present simple methods of allowing for differences in rigour (or lack of internal bias) and relevance (or lack of external bias) in evidence synthesis. The methods are developed in the context of reanalysing a UK National Institute for Clinical Excellence technology appraisal in antenatal care, which includes eight comparative studies. Many were historically controlled, only one was a randomized trial, and doses, populations and outcomes varied between studies and differed from the target UK setting. Using elicited opinion, we construct prior distributions to represent the biases in each study and perform a bias-adjusted meta-analysis. Adjustment had the effect of shifting the combined estimate away from the null by approximately 10%, and the variance of the combined estimate was almost tripled. Our generic bias modelling approach allows decisions to be based on all available evidence, with less rigorous or less relevant studies downweighted by using computationally simple methods.
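A minimal sketch of the generic adjustment, assuming an additive bias for each study with an elicited mean and variance: shift each estimate by its expected bias, inflate its variance by the elicited bias variance, then pool by inverse variance. The numbers below are illustrative only, not the appraisal data.

```python
# Hedged sketch of bias-adjusted inverse-variance meta-analysis: each study
# estimate y with variance v is adjusted by the elicited mean and variance
# of its combined internal and external biases. Inputs are illustrative.
import numpy as np

def bias_adjusted_pool(y, v, bias_mean, bias_var):
    """Fixed-effect inverse-variance pooling of bias-adjusted estimates."""
    y_adj = np.asarray(y) - np.asarray(bias_mean)   # shift by expected bias
    v_adj = np.asarray(v) + np.asarray(bias_var)    # inflate the variance
    w = 1.0 / v_adj                                 # less rigorous or less
    pooled = np.sum(w * y_adj) / np.sum(w)          # relevant studies get
    return pooled, 1.0 / np.sum(w)                  # lower weight

# Toy example: three studies, the second judged most biased
pooled, var = bias_adjusted_pool(
    y=[-0.40, -0.10, -0.30], v=[0.04, 0.02, 0.05],
    bias_mean=[0.00, 0.15, 0.05], bias_var=[0.01, 0.06, 0.02])
print(pooled, var**0.5)  # pooled estimate and its standard error
```

Inflating the variances is what triples the uncertainty of the combined estimate; shifting by the expected biases is what moves it away from the null.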